Search Results: "sge"

28 October 2017

Russ Allbery: Review: Provenance

Review: Provenance, by Ann Leckie
Publisher: Orbit
Copyright: September 2017
ISBN: 0-316-38863-7
Format: Kindle
Pages: 448
In a rather desperate attempt to please her mother, Ingray has spent every resource she has on extracting the son of a political enemy from Compassionate Removal (think life imprisonment with really good marketing). The reason: vestiges, a cultural touchstone for Ingray's native planet of Hwae. These are invitation cards, floor tiles, wall panels, or just about anything that can be confirmed to have been physically present at an important or historical moment, or in the presence of a famous figure. The person Ingray is retrieving supposedly pulled off the biggest theft of vestiges in history. If she can locate them, it would be a huge coup for her highly-placed politician mother, and the one time she would be victorious in her forced rivalry with her brother. About the best thing that could be said for this plan is that it's audacious. The first obstacle is the arrival of the Geck on the station for a Conclave for renegotiation of the treaty with the Presger, possibly the most important thing going on in the galaxy at the moment, which strands her there without money for food. The second is that the person she has paid so much to extract from Compassionate Removal says they aren't the person she was looking for at all, and are not particularly interested in going with her to Hwae. Only a bit of creative thinking in the face of a visit from the local authorities, and the unexpected kindness of the captain from whom she booked travel, might get her home with the tatters of her plan intact. But she's clearly far out of her depth. Provenance is set in the same universe as Ancillary Justice and its sequels, but it is not set in the empire of the Radchaai. This is another human world entirely, one with smaller and more provincial concerns. The aftermath of Ancillary Mercy is playing out in the background (so do not, on risk of serious spoilers, read the start of this book without having read the previous trilogy), but this is in no way a sequel. Neither the characters nor the plot are involved in that aftermath. It's a story told at a much smaller scale, about two political families, cut-throat maneuvering, horrible parenting, the inexplicable importance of social artifacts, the weirdness of human/alien relations, and the merits of some very unlikely allies. Provenance is a very different type of story than Ancillary Justice, and Ingray is a very different protagonist. The shape of the plot reminded me of one of Lois McMaster Bujold's Miles Vorkosigan stories: hair-brained ideas, improvisation, and unlikely allies. But Ingray couldn't be more different than Miles. She starts the book overwhelmed, despairing, and not at all manic, and one spends the first part of the story feeling sorry for her and becoming quite certain that everything will go horribly wrong. The heart of this book is the parallel path Leckie takes the reader and the characters along as they discover just what Ingray's true talents and capabilities are. It's a book about being hopelessly bad at things one was pressured towards being good at, while being quietly and subtly good at the skills that let one survive a deeply dysfunctional family. There are lots of books with very active protagonists, and a depressing number of books with passive protagonists pushed around by the plot. 
There are very few books that pull off the delicate characterization that Leckie manages here: a protagonist who is rather hopeless at taking charge of the plot in the way everyone wants (but doesn't particularly expect) her to, but who charts her own path through the plot in an entirely unexpected way. It's a story that grows on you. The plot rhythm never works in quite the way one expects from other books, but it builds its own logic and its own rhythm, and reached a very emotionally satisfying conclusion. The Radchaai, or at least one Radchaai citizen, do show up eventually, providing a glimmer of outside view at the Ancillary Justice world. Even better, the Geck play a significant role. I adore Leckie's aliens: they're strange and confusing, but in a refreshingly blunt way rather than abusing gnomic utterances and incomprehensible intelligence. And the foot-stomping of the spider bot made me laugh every time. The stakes are a lot lower here than in Ancillary Justice, and Ingray isn't the sort of character who's going to change the world. But that's okay; indeed, one of the points of this book is why and how that's okay. I won't lie: I'd love more Breq, and I hope we eventually get an exploration of the larger consequences of her story. But this is a delightful story that made me happy and has defter character work than most SF being written. Recommended, but read the Ancillary trilogy first. One minor closing complaint, which didn't change my experience of the book but which I can't help quibbling about: I'm completely onboard with the three-gender system that Leckie uses for the Hwae (I wish more SF authors would play with social as well as technological ideas), and I think she wove it deftly into the story, but I wish she hadn't used Spivak pronouns for the third gender. (e/em/eir, for those who aren't familiar.) Any of the other gender-neutral pronouns look better to me and cause fewer problems for my involuntary proofreader. I prefer zie/zir for personal reasons, but sie/hir, zhe/zhim/zher, or even thon or per would read more smoothly. Eir is fine, but em looks like 'em and throws my brain into dialect mode and forces a re-parse, and e just looks like a typo. I know from lots of Usenet discussions of pronouns that I'm not the only one who has that reaction to Spivak. But it's a very minor nit. Rating: 8 out of 10

28 September 2017

Matthias Klumpp: Adding fonts to software centers

Last year, the AppStream specification gained proper support for adding metadata for fonts, after Richard Hughes had done some work on it years ago. We weren't happy with how fonts were handled at that time, so we searched for better solutions, which is why this took a bit longer to be done. Last year, I implemented the final support for fonts in both appstream-generator (the metadata extractor used by Debian and a few others) and the AppStream specification. This blogpost was sitting on my todo list as a draft for a long time now, and I only just managed to finish it, so sorry for announcing this so late. Fonts have already been available via AppStream for a year, and this post just sums up the status quo and some neat tricks if you want to write metainfo files for fonts. If you are following AppStream (or the Debian fonts list), you know everything already.

Both Richard and I first tried to extract all the metadata needed to display fonts in a proper way to the users from the font files directly. This turned out to be very difficult, since font metadata is often wrong or incomplete, and certain desirable bits of metadata (like a longer description) are missing entirely. After messing around with different ways to solve this for days (after all, by extracting the data from font files directly we would have hundreds of fonts directly available in software centers), I also came to the same conclusion as Richard: the best and easiest solution here is to mandate the availability of metainfo files per font.

Which brings me to the second issue: what is a font? A person who knows about fonts will understand one font as one font face, e.g. "Lato Regular Italic" or "Lato Bold". A user, however, will see the font family as a font, e.g. just "Lato" instead of all the font faces listed separately. Since AppStream data is used primarily by software centers, we want something that is easy for users to understand. Hence, an AppStream font component really describes a font family or collection of fonts, instead of individual font faces. We also want AppStream data to be useful for system components looking for a specific font, which is why font components advertise the individual font face names they contain via a <provides/> tag. Naming fonts and making them identifiable is a whole other issue; I used a document from Adobe on font naming issues as a rough guideline while working on this.

How to write a good metainfo file for a font is best shown with an example. Lato is a nice-looking font family that we want displayed in a software center. So, we write a metainfo file for it and place it in /usr/share/metainfo/com.latofonts.Lato.metainfo.xml for the AppStream metadata generator to pick up:
<?xml version="1.0" encoding="UTF-8"?>
<component type="font">
  <id>com.latofonts.Lato</id>
  <metadata_license>FSFAP</metadata_license>
  <project_license>OFL-1.1</project_license>
  <name>Lato</name>
  <summary>A sans-serif typeface family</summary>
  <description>
    <p>
      Lato is a sans-serif typeface family designed in the Summer 2010 by Warsaw-based designer
      Łukasz Dziedzic ("Lato" means "Summer" in Polish). In December 2010 the Lato family
      was published under the open-source Open Font License by his foundry tyPoland, with
      support from Google.
    </p>
  </description>
  <url type="homepage">http://www.latofonts.com/</url>
  <provides>
    <font>Lato Regular</font>
    <font>Lato Black Italic</font>
    <font>Lato Black</font>
    <font>Lato Bold Italic</font>
    <font>Lato Bold</font>
    <font>Lato Hairline Italic</font>
    ...
  </provides>
</component>
When the file is processed, we know that we need to look for fonts in the package it is contained in. So, the appstream-generator will load all the fonts in the package and render example texts for them as an image, so we can show users a preview of the font. It will also use heuristics to render an icon for the respective font component using its regular typeface. Of course that is not ideal: what if there are multiple font faces in a package? What if the heuristics fail to detect the right font face to display? This behavior can be influenced by adding <font/> tags to a <provides/> tag in the metainfo file. The font-provides tags should contain the full names of the font faces you want to associate with this font component. If a font file does not define a full name, its family and style are used instead. That way, whoever writes the metainfo file can control which fonts belong to the described component. The metadata generator will also pick the first font name mentioned in the <provides/> list as the one to render the example icon for. It will also sort the example text images in the same order as the fonts are listed in the provides-tag. The example lines of text are written in a language matching the font, using Pango. But what about symbolic fonts? Or fonts where any heuristic fails? At the moment, we see ugly tofu characters or boxes instead of an actual, useful representation of the font. This brings me to an unofficial extension to font metainfo files that, as far as I know, only appstream-generator supports at the moment. I am not happy enough with this solution to add it to the real specification, but it serves as a good method to fix up the edge cases where we cannot render good example images for fonts. appstream-generator supports the FontIconText and FontSampleText custom AppStream properties to allow metainfo file authors to override the default texts and autodetected values. FontIconText overrides the characters used to render the icon, while FontSampleText can be a line of text used to render the example images. This is especially useful for symbolic fonts, where the heuristics usually fail and we do not know which glyphs would be representative of the font. For example, a font with mathematical symbols might want to add the following to its metainfo file:
<custom>
  <value key="FontIconText"> </value>
  <value key="FontSampleText">       ...         </value>
</custom>
Any Unicode glyphs are allowed, but asgen will put some length restrictions on the texts. So, in summary:

1 June 2017

Paul Wise: FLOSS Activities May 2017

Changes

Issues

Review

Administration
  • Debian: discuss mail bounces with a hoster, check perms of LE results, add 1 user to a group, re-send some TLS cert expiry mail, clean up mail bounce flood, approve some debian.net TLS certs, do the samhain dance thrice, end 1 samhain mail flood, diagnose/fix LDAP update issue, relay DebConf cert expiry mails, reboot 2 non-responsive VMs, merge patches for the debian.org-sources.debian.org meta-package
  • Debian mentors: lintian/security updates & reboot
  • Debian wiki: delete stray tmp file, whitelist 14 email addresses, disable 1 account with bouncing email, ping 3 persons with bouncing email
  • Debian website: update/push index/CD/distrib
  • Debian QA: deploy my changes, disable some removed suites in qadb
  • Debian PTS: strip whitespace from existing pages, invalidate sigs so pages get a rebuild
  • Debian derivatives census: deploy changes
  • Openmoko: security updates & reboots.

Communication
  • Invite Purism (on IRC), XBian (also on IRC), DuZeru to the Debian derivatives census
  • Respond to the shutdown of Parsix
  • Report BlankOn fileserver and Huayra webserver issues
  • Organise a transition of Ubuntu/Endless Debian derivatives census maintainers
  • Advocate against Debian having a monopoly on hardware certification
  • Advocate working with existing merchandise vendors
  • Start a discussion about Debian membership in other organisations
  • Advocate for HPE to join the LVFS & support fwupd

Sponsors

All work was done on a volunteer basis.

17 April 2017

Ross Gammon: My March 2017 Activities

March was a busy month, so this monthly report is a little late. I worked two weekends, and I was planning my Easter holiday, so there wasn't a lot of spare time.

Debian
  • Updated Dominate to the latest version and uploaded to experimental (due to the Debian Stretch release freeze).
  • Uploaded the latest version of abcmidi (also to experimental).
  • Pinged the bugs for reverse dependencies of pygoocanvas and goocanvas with a view to getting them removed from the archive during the Buster cycle.
  • Asked for help on the Ubuntu Studio developers and users mailing lists to test the coming Ubuntu Studio 17.04 release ISO, because I would be away on holiday for most of it.
Ubuntu
  • Worked on ubuntustudio-controls, reverting it back to an earlier revision that Len said was working fine. Unfortunately, when I built and installed it from my ppa, it crashed. Eventually found my mistake with the bzr reversion, fixed it and prepared an upload ready for sponsorship. Submitted a Freeze Exception bug in the hope that the Release Team would accept it even though we had missed the Final Beta.
  • Put a new power supply in an old computer that was kaput, and got it working again. Set up Ubuntu Server 16.04 on it so that I could get a bit more experience with running a server. It won't last very long, because it is a 32 bit machine, and Ubuntu will probably drop support for that architecture eventually. I used two small spare drives to set up RAID 1 & LVM (so that I can add more space to it later). I set up some Samba shares, so that my wife will be able to get at them from her Windows machine. For music streaming, I set up Emby Server. It would be great to see this packaged for Debian. I uploaded all of my photos and music for Emby to serve around the home (and remotely as well). Set up Obnam to back up the server to an external USB stick (temporarily until I set up something remote). Set up LetsEncrypt with the wonderful Certbot program.
  • Did the Release Notes for Ubuntu Studio 17.04 Final Beta. As I was in Brussels for two days, I was not able to do any ISO testing myself.
Other
  • Measured up the new model railway layout and documented it in xtrkcad.
  • Started learning Ansible some more by setting up ssh on all my machines so that I could access them with Ansible and manipulate them using a playbook.
  • Went to the Open Source Days conference just down the road in Copenhagen. Saw some good presentations. Of interest for my previous work in the Debian GIS Team was a presentation from the Danish Municipalities on how they run projects using Open Source. I noted their use of Proj.4 and OSGeo. I was also pleased to see a presentation from Ximin Luo on Reproducible Builds, and introduced myself briefly after his talk (during the break).
  • Started looking at creating a Django website to store and publish my One Name Study sources (indexes). Started by creating a library to list some of my recently read Journals. I will eventually need to import all the others I have listed in a csv spreadsheet that was originally exported from the commercial (Windows only) Custodian software.
Plan status from last month & update for next month

Debian

For the Debian Stretch release:
  • Keep an eye on the Release Critical bugs list, and see if I can help fix any. In Progress
Generally:
  • Package all the latest upstream versions of my Debian packages, and upload them to Experimental to keep them out of the way of the Stretch release. In Progress
  • Begin working again on all the new stuff I want packaged in Debian.
Ubuntu
  • Start working on an Ubuntu Studio package tracker website so that we can keep an eye on the status of the packages we are interested in. Started
  • Start testing & bug triaging Ubuntu Studio packages. In progress
  • Test Len's work on ubuntustudio-controls. Done
  • Do the Ubuntu Studio Zesty 17.04 Final Beta release. Done
  • Sort out the Blueprints for the coming Ubuntu Studio 17.10 release cycle.
Other
  • Give JMRI a good try out and look at what it would take to package it. In progress
  • Also look at OpenPLC for simulating the relay logic of real railway interlockings (i.e. a little bit of the day job at home involving free software fun!). In progress

6 February 2017

Wouter Verhelst: FOSDEM 2017 is finished...

... but that doesn't mean the work is over. One big job that needs to happen after the conference is to review and release the video recordings that were made. With several hundred videos to be checked and only a handful of people with the ability to do so, review was a massive job that for the past three editions took several months; e.g., in 2016 the last video work was done in July, when the preparation of the 2017 edition had already started. Obviously this is suboptimal, and therefore another solution was required. After working on it for quite a while (in my spare time), I came up with SReview, a video review and transcoding system written in Perl. An obvious question that could be asked is why I wrote yet another system, though, and did not use something that already existed. The short answer to that is "because what's there did not exactly do what I wanted to". The somewhat longer answer also involves the fact that I felt like writing something from scratch. The full story, however, is this: there isn't very much out there, and what does exist is flawed in some ways. I am aware of three other review systems that are or were used by other conferences:
  1. A bunch of shell scripts that were written by the DebConf video team and hooked into the penta database. Nobody but DebConf ever used it. It allowed review via an NFS share and a web interface, and required people to watch .dv files directly from the filesystem in a media player. For this and other reasons, it could only ever be used from the conference itself. If nothing else, that final limitation made it impossible for FOSDEM to use it, but even if that wasn't the case it was still too basic to ever be useful for a conference the size of FOSDEM.
  2. A review system used by the CCC "voc" team. I've never actually seen it in use, but I've heard people describe it. It involves a complicated setup of Samba servers, short MPEG transport stream segments, a FUSE filesystem, and kdenlive, which took someone several days to set up as an experiment back at DebConf15. Critically, important parts of it are also not licensed as free software, which to me rules it out for a tool in support of FOSDEM. Even if that wasn't the case, however, I'm still not sure it would be ideal; this system requires intimate knowledge of how it works from its user, which makes it harder for us to crowdsource the review to the speaker, as I had planned to.
  3. Veyepar. This one gets many things right, and we used it for video review at DebConf from DebConf14 onwards, as well as FOSDEM 2014 (but not 2015 or 2016). Unfortunately, it also gets many things wrong. Most of these can be traced back to the fact that Carl, as he freely admits, is not a programmer; he's more of a sysadmin type who also manages to cobble together a few scripts now and then. Some of the things it gets wrong are minor issues that would theoretically be fixable with a minimal amount of effort; others would be more involved. It is also severely underdocumented, and so as a result it is rather tedious for someone not very familiar with the system to be able to use it. On a more personal note, veyepar is also written in the wrong language, so while I might have spent some time improving it, I ended up starting from scratch.
Something all these systems have in common is that they try to avoid postprocessing as much as possible. This only makes sense; if you have to deal with loads and loads of video recordings, having to do too much postprocessing only ensures that it won't get done... Despite the issues that I have with it, I still think that veyepar is a great system, and am not ashamed to say that SReview borrows many ideas and concepts from it. However, it does things differently in some areas, too:
  1. A major focus has been on making the review form be as easy to use as possible. While there is still room for improvement (and help would certainly be welcome in that area from someone with more experience in UI design than me), I think the SReview review form is much easier to use than the veyepar one (which has so many options that it's pretty hard to understand sometimes).
  2. SReview assumes that as soon as there are recordings in a given room sufficient to fill all the time that a particular event in that room was scheduled for, the whole event is available. It will then generate a first rough cut, and send a notification to the speaker in question, as well as the people who organized the devroom. The reviewer will then almost certainly be required to request a second (and possibly third or fourth) cut, but I think the advantage of that is that it makes the review workflow be more intuitive and easier to understand.
  3. Where veyepar requires one or more instances of per-state scripts to be running (which will then each be polling the database and just start a transcode or cut or whatever script as needed), SReview uses a single "dispatch" script, which needs to be run once for the whole system (if using an external scheduler) or once per core that may be used (if not using an external scheduler), and which does all the database polling required. The use of an external scheduler seemed more appropriate, given that things like gridengine exist; gridengine is a job scheduler which allows one to submit a job to be run on any node in a cluster, along with the resources that this particular job requires, and which will then either find an appropriate node to run the job on, or will put the job in a "pending" state until the required resources can be found. This allows me to more easily add extra encoding capacity when required, and allows me to also do things like allocate fewer resources to a particular part of the whole system, even while jobs are already running, without necessarily needing to abort jobs that might be using those resources.
The system seems to be working fine, although there's certainly still room for improvement. I'm thinking of using it for DebConf17 too, and will therefore probably work on improving it during DebCamp. Additionally, the experience of using it for FOSDEM 2017 has given me ideas of where to improve it further, so it can be used more easily by different parties, too. Some of these have been filed as issues against a "1.0" milestone on github, but others are only newly formed in my gray matter and will need some thinking through before they can be properly implemented. Certainly, it looks like this will be something that'll give me quite some fun developing further. In the mean time, if you're interested in the state of a particular video of FOSDEM 2017, have a look at the video overview page, which lists all talks along with their review/transcode status. Also, if you were a speaker or devroom organizer at FOSDEM 2017, please check your mailbox and review your talk! With your help, we should hopefully be able to release all our videos by the end of the week. Update (2017-02-06 17:18): clarified my position on the qualities of some of the other systems after feedback from people who were a bit disappointed by my description of them... and which was fair enough. Apologies. Update (2017-02-08 16:26): Fixes to the c3voc stuff after feedback from them.

29 December 2016

Sven Hoexter: Out of the comfort zone: OpenSuSE support for an ordinary user - f*ck my morals

A friend of mine chose, for $reasons, to install the latest OpenSuSE 42.2 release as his new laptop operating system. It's been a while since I last had contact with the SuSE Linux distribution, must be around 12 years or so. The unusual part here is that I have to support a somewhat eccentric, but mostly ordinary, user of computers. And to my surprise it's still hard to just plug in your existing stuff and expect it to work. I've done so many dirty things to this installation in the last three days, my system engineering heart is bleeding.

Printing with a Canon Pixma iP100 printer

This is a small portable Canon printer, about four years old. It provides decent quality and its main strength is that it's small and really portable. Sadly the gutenprint driver just pushes through a blank page. No ink wasted on it at all. So the only reasonable other choice was a four year old binary rpm package provided by Canon. It has a file dependency on "libtiff.so.3" which is no longer available in recent GNU/Linux distributions. So I cheated and
- unpacked the tarball
- installed the rpms from the "packages" folder:
  zypper install cnijfilter-common-3.70-1.x86_64.rpm cnijfilter-ip100series-3.70-1.x86_64.rpm
  ... and chose to ignore the missing file dependency on libtiff.so.5.
- worked around the missing libtiff dependency with a symlink:
  ln -s /usr/lib64/libtiff.so /usr/lib64/libtiff.so.5
- re-ran ./install.sh, which registered the printer with cups and does whatever else
  magic is included in 1906 lines of shell.
To my surprise this driver still works and provides the expected quality. Though it's just a question of time until this setup will break, be it through an incompatible ABI change in libtiff or another lib in use by those Canon-provided tools.

QGIS and gdal with ECW support

While the printer stuff is a rather common use case, having a map viewer for map files in the ECW format is the eccentric part. I found some hints on Stack Overflow, and subsequently on https://trac.osgeo.org/gdal/wiki/ECW, that a non-free library and a specific build of gdal are required. Then QGIS should be able to work with ECW files. Lucky for us there is at least an OpenSuSE repository for gdal and QGIS. So I did the following:
zypper addrepo http://download.opensuse.org/repositories/Application:/Geo/openSUSE_Leap_42.2/Application:Geo.repo
zypper install qgis
Then I had to download the non-free ECW SDK from http://download.hexagongeospatial.com/downloads/erdas-ecw-jp2-sdk-v5.3-%28linux%29 - you'll end up with a '.bin' installer file. The installation process left me with an "ERDAS-ECW_JPEG_2000_SDK-5.3.0" folder in my $HOME. I moved that one to /opt. The next step is adding the library to the ldconfig search path.
echo "/opt/ERDAS-ECW_JPEG_2000_SDK-5.3.0/Desktop_Read-Only/lib/x64/release/" > /etc/ld.so.conf.d/ecw.conf; ldconfig
Now it was "just" about rebuild gdal with ECW support. So I downloaded the required source packages with "zypper source-install gdal", edited the spec somewhere in "/usr/src/" to make the following modifications
--with-ecw=/opt/ERDAS-ECW_JPEG_2000_SDK-5.3.0/Desktop_Read-Only
added to the "./configure" invocation. And somewhere at the top we had to relax the requirement that all installed files have to be referenced inside the package.
%define _unpackaged_files_terminate_build 0
As a last step I had to "rpmbuild -ba" the package and force the installation via zypper once more, because this time we have a file depedency on the libecw stuff and it's obviously not listed in the rpm database. Last but not least I tried to put the gdal build on hold with
zypper addlock gdal libgdal20
to ensure it's not removed on the next update.

Other non-free tools

Besides those two issues I had to install a range of other non-free tools, but currently they work without further issues or modifications. One is Teamviewer (i686 multiarch rpm) and the other one is XnViewMP. XnView is also able to show ECW files, but only the smaller ones. It crashes on bigger ones, but that's also the case on Windows. Then there is also (required by some Italian map-related websites) the ugly Adobe Flash Plugin for Firefox, but that one is sadly still a widespread issue. We also tried out the nvidia graphics drivers, but at the moment we could only get the built-in Intel card to work. That is usually the preferred solution from my point of view, but sometimes we see rendering glitches and I'm not sure if it's the driver or something else.

My personal take away

I hate to admit it, but nothing extraordinary was requested here. Still, it took me the better part of two evenings to figure everything out. And even now it's not properly integrated and is doomed to fail any day due to various updates and changes in the surrounding ecosystem. I have full sympathy for every average user who would give up after two hours of research and trial & error on this journey. For the printer drivers I'm happy to blame Canon. The printer situation as a whole has improved from my point of view during the last decade, but it's still a pain in the ass with the very short shelf life you usually see with consumer models. For the ECW case one could discuss whether it would be legally possible and helpful to do ugly dlopen() stuff to dynamically load the shared libs. But then again someone has to get his hands dirty during the build, and discussions about the legal use of header files will be the next chapter (hello Oracle). It's just ugly. Actually I know too little about the world of image formats to judge whether someone has a good reason to keep this format commercial or not. From my personal point of view it's not useful and maybe even morally wrong. Technically one could argue whether it would make sense to keep a local copy of the gdal build in "/opt" and start QGIS with a modified library path to prefer the private gdal build. Not sure if that is any better. On the other hand there are evolving mechanisms like Flatpak that would ease the handling of such situations. But then again we would be catering to non-free software. It feels a lot like giving up. While my private working environment is, except for firmware blobs, free, I have now created for someone a real "FrankenSuSE" to satisfy his everyday needs. On the one hand we now have another mostly satisfied user of a mostly free operating system. On the other hand that was only possible by adding a vast amount of non-free software. For sure we did not win the war; I'm not even sure we've won a single battle here. It's just frustrating to see what is required to get someone up and running. With my personal attitude towards open source software it even feels wrong to invest so much time into fiddling with non-free components.

What is still missing

We currently lack an image viewer that allows us to print only a selection of an image, which is useful to print parts of a map. That usually works with XnView on Windows but does not work with the Linux version at the moment. I also tried gwenview and geeqie and had the same issue. Not sure if it's maybe a bug in XnView or one of the Qt parts (gwenview is also Qt based). I did not research that yet.
Update: I spent quite some time looking into open bug reports for geeqie and gwenview. It seems the feature to print only a section of an image is something new. I've created #374299 (gwenview) and #457 (geeqie). For XnView I expect it's a difference between XnViewMP (the portable version) and the Windows-only XnView Classic. That needs to be clarified, and it might be worth trying XnView Classic with wine. Maybe printing with wine via cups works; I found at least some results for it on the internet.

26 July 2016

Rhonda D'Vine: Debian LGBTIQA+

I have a long overdue blog entry about what happened in recent times. People that follow my tweets did catch some things. Most noteworthy there was the Trans*Inter*Congress in Munich at the start of May. It was an absolute blast. I met so many nice and great people, talked and experienced so many great things there that I'm still having a great motivational push from it every time I think back. It was also the time when I realized that I in fact do have body dysphoria even though I thought I'm fine with my body in general: Being tall is a huge issue for me. Realizing that I have a huge issue (yes, pun intended) with my length was quite relieving, even though it doesn't make it go away. It's something that makes passing and transitioning for me harder. I'm well aware that there are tall women, and that there are dedicated shops for lengthy women, but that's not the only thing that I have trouble with. What bothers me most is what people read into tall people: that they are always someone they can lean on for comfort, that tall people are always considered to be self confident and standing up for themselves (another pun, I know ... my bad). And while I'm fine with people coming to me for leaning on to, I rarely get the chance to do so myself. And people don't even consider it. When I was there in Munich, talking with another great (... pun?) trans woman who was as tall as me, I finally had the possibility to just rest my head on her shoulder and finally feel the comfort I need just as much as everyone else out there, too. Probably that's also the reason why I'm so touchy and do go Free Hugging as often as possible. But being tall also means that you are usually only the big spoon when cuddling up. Having a small mental breakdown because of realizing that didn't change the feeling directly, but definitely helped with looking for what I could change to fix that for myself. Then, at the end of May, the movie FtWTF - female to what the fuck came to the cinema. It's a documentary about six people who got assigned female at birth. And it's absolutely charming, and has great food for thought in it. If you ever get the chance to watch it you definitely should. And then came debconf16 in Cape Town. The flight there was canceled and we had to get rebooked. The first offer was to go through Dubai, and gladly a colleague pointed out to the person behind the desk that that wouldn't be safe for myself and thus was out of scope. In the end we managed to get to Cape Town quite nicely, and even though it was winter, when the sun was shining it was quite nice. Besides the cold nights, that is. Or being stuck on the way up to Table Mountain because a colleague had cramps in his legs and we had to call mountain rescue. Gladly the night was clear, and when the mountain rescue finally got us to the top and it was night already, we had one of the nicest views from up there most people probably never will experience. And then ... I got invited to a trans meetup in Cape Town. I was both excited and nervous about it, not knowing what to expect there. But it was simply great. The group there was simply outstandingly great. The host gave updated information on the progress of clinical support within South Africa; what I took with me is that there is only one clinic there for SRS, which manages only two people a year, which is simply ... yuck. I guess you can guess how many years (yes, decades) the waiting line is ... I was blown away though by the diversity of the group, on so many levels, most notably on the age spectrum.
It was a charm to meet you all there! If you ever stop by in Cape Town and you are part of the LGBTIQ community, make sure you get in contact with the Triangle Project. But, about the real reason to write this entry: I was approached at Debconf by at least two people who asked me what I thought about creating an LGBTIQA+ group within Debian, and if I'd like to push for that. Actually I think it would be a good idea to have some sort of exchange between people on the queer spectrum (and I hope I don't offend anyone by just saying queer for LGBTIQA+ people). Given that I'm quite outspoken, people approach me every now and then, so I'm aware that there is a fair amount of people that would fall into that category. On the other hand some of them wouldn't want to have it publicly known because it shouldn't matter and isn't really the business of others. So I'm uncertain. If we follow that path I guess something that is closed, or at least offers the possibility of closed communication, would be needed so as not to out someone by just joining in the discussion. It was easier with Debian Women, where it was (somewhat) clear that male participants are allies supporting the cause and not considered to be women themselves, but often enough (mostly cis hetero male) people are afraid to join a dedicated LGBTIQA+ group because they have the fear of having their identity judged. These things should be considered before creating such a place, so that people can feel comfortable when joining and know what to expect beforehand. For the time being I created #debian-diversity on irc.debian.org to discuss how to move forward. Please bear in mind that even the channel name is up for discussion. Acronyms might not be the way to go in my opinion; just read back up on the discussion that led to the Diversity Statement of Debian, where the original approach was to start listing groups for inclusiveness, but it was quickly clear that it can get outdated too easily. I am willing to be part of that effort, but right now I have some personal things to deal with which eat up a fair amount of my time. My kid starts school in September (yes, it's that long already, time flies ...). And it looks like I'll have to move a second time in the near future: I'll have to leave my current flat by the end of the year and the Que[e]rbau I'm moving into won't be ready by that time to host me yet ... F*ck. :(


10 May 2016

Reproducible builds folks: Reproducible builds: week 54 in Stretch cycle

What happened in the Reproducible Builds effort between May 1st and May 7th 2016: Media coverage There has been a surprising tweet last week: "Props to @FiloSottile for his nifty gvt golang tool. We're using it to get reproducible builds for a Zika & West Nile monitoring project." and to our surprise Kenn confirmed privately that he indeed meant "reproducible builds" as in "bit by bit identical builds". Wow. We're looking forward to learn more details about this; for now we just know that they are doing this for software quality reasons basically. Two of the four GSoC and Outreachy participants for Reproducible builds posted their introductions to Planet Debian: Toolchain fixes and other upstream developments dpkg 1.18.5 was uploaded fixing two bugs relevant to us: This upload made it necessary to rebase our dpkg on the version on sid again, which Niko Tyni and Lunar promptly did. Then a few days later 1.18.6 was released to fix a regression in the previous upload, and Niko promptly updated our patched version again. Following this Niko Tyni found #823428: "dpkg: many packages affected by dpkg-source: error: source package uses only weak checksums". Alexis Bienven e worked on tex related packages and SOURCE_DATE_EPOCH: Emmanuel Bourg uploaded jflex/1.4.3+dfsg-2, which removes timestamps from generated files. Packages fixed The following 285 packages have become reproducible due to changes in their build dependencies (mostly from GCC honouring SOURCE_DATE_EPOCH, see the previous week report): 0ad abiword abcm2ps acedb acpica-unix actiona alliance amarok amideco amsynth anjuta aolserver4-nsmysql aolserver4-nsopenssl aolserver4-nssqlite3 apbs aqsis aria2 ascd ascii2binary atheme-services audacity autodocksuite avis awardeco bacula ballerburg bb berusky berusky2 bindechexascii binkd boinc boost1.58 boost1.60 bwctl cairo-dock cd-hit cenon.app chipw ckermit clp clustalo cmatrix coinor-cbc commons-pool cppformat crashmail crrcsim csvimp cyphesis-cpp dact dar darcs darkradiant dcap dia distcc dolphin-emu drumkv1 dtach dune-localfunctions dvbsnoop dvbstreamer eclib ed2k-hash edfbrowser efax-gtk efax exonerate f-irc fakepop fbb filezilla fityk flasm flightgear fluxbox fmit fossil freedink-dfarc freehdl freemedforms-project freeplayer freeradius fxload gdb-arm-none-eabi geany-plugins geany geda-gaf gfm gif2png giflib gifticlib glaurung glusterfs gnokii gnubiff gnugk goaccess gocr goldencheetah gom gopchop gosmore gpsim gputils grcompiler grisbi gtkpod gvpe hardlink haskell-github hashrat hatari herculesstudio hpcc hypre i2util incron infiniband-diags infon ips iptotal ipv6calc iqtree jabber-muc jama jamnntpd janino jcharts joy2key jpilot jumpnbump jvim kanatest kbuild kchmviewer konclude krename kscope kvpnc latexdiff lcrack leocad libace-perl libcaca libcgicc libdap libdbi-drivers libewf libjlayer-java libkcompactdisc liblscp libmp3spi-java libpwiz librecad libspin-java libuninum libzypp lightdm-gtk-greeter lighttpd linpac lookup lz4 lzop maitreya meshlab mgetty mhwaveedit minbif minc-tools moc mrtrix mscompress msort mudlet multiwatch mysecureshell nifticlib nkf noblenote nqc numactl numad octave-optim omega-rpg open-cobol openmama openmprtl openrpt opensm openvpn openvswitch owx pads parsinsert pcb pd-hcs pd-hexloader pd-hid pd-libdir pear-channels pgn-extract phnxdeco php-amqp php-apcu-bc php-apcu php-solr pidgin-librvp plan plymouth pnscan pocketsphinx polygraph portaudio19 postbooks-updater postbooks powertop previsat progressivemauve puredata-import pycurl qjackctl qmidinet qsampler qsopt-ex 
qsynth qtractor quassel quelcom quickplot qxgedit ratpoison rlpr robojournal samplv1 sanlock saods9 schism scorched3d scummvm-tools sdlbasic sgrep simh sinfo sip-tester sludge sniffit sox spd speex stimfit swarm-cluster synfig synthv1 syslog-ng tart tessa theseus thunar-vcs-plugin ticcutils tickr tilp2 timbl timblserver tkgate transtermhp tstools tvoe ucarp ultracopier undbx uni2ascii uniutils universalindentgui util-vserver uudeview vfu virtualjaguar vmpk voms voxbo vpcs wipe x264 xcfa xfrisk xmorph xmount xyscan yacas yasm z88dk zeal zsync zynaddsubfx Last week the 1000th bug usertagged "reproducible" was fixed! This means roughly 2 bugs per day since 2015-01-01. Kudos and huge thanks to everyone involved! Please also note: FTBFS packages have not been counted here and there are still 600 open bugs with reproducible patches provided. Please help bringing that number down to 0! The following packages have become reproducible after being fixed: Some uploads have fixed some reproducibility issues, but not all of them: Uploads which fix reproducibility issues, but currently FTBFS: Patches submitted that have not made their way to the archive yet: Package reviews 54 reviews have been added, 6 have been updated and 44 have been removed in this week. 18 FTBFS bugs have been reported by Chris Lamb, James Cowgill and Niko Tyni. diffoscope development Thanks to Mattia, diffoscope 52~bpo8+1 is available in jessie-backports now. tests.reproducible-builds.org Misc. This week's edition was written by Reiner Herrmann, Holger Levsen and Mattia Rizzolo and reviewed by a bunch of Reproducible builds folks on IRC. Mattia also wrote a small ikiwiki macro for this blog to ease linking reproducible issues, packages in the package tracker and bugs in the Debian BTS.

6 May 2016

Matthias Klumpp: Adventures in D programming

I recently wrote a bigger project in the D programming language, the appstream-generator (asgen). Since I rarely leave the C/C++/Python realm, and came to like many aspects of D, I thought blogging about my experience could be useful for people considering to use D. Disclaimer: I am not an expert on programming language design, and this is not universally valid criticism of D, just my personal opinion from building one project with it.

Why choose D in the first place?

The previous AppStream generator was written in Python, which wasn't ideal for the task for multiple reasons, most notably multiprocessing and LMDB not working well together (and in general, multiprocessing being terrible to work with) and the need to reimplement some already existing C code in Python again. So, I wanted a compiled language which would work well together with the existing C code in libappstream. Using C was an option, but my least favourite one (writing this in C would have been much more cumbersome). I looked at Go and Rust and wrote some small programs performing basic operations that I needed for asgen, to get a feeling for the languages. Interfacing C code with Go was relatively hard: since libappstream is a GObject-based C library, I expected to be able to auto-generate Go bindings from the GIR, but there were only a few outdated projects available which did that. Rust on the other hand required the most time to learn, and since I only briefly looked into it, I still can't write Rust code without having the coding reference open. I started to implement the same examples in D just for fun, as I didn't plan to use D (I was aiming at Go back then), but the language looked interesting. The D language had the huge advantage of being very familiar to me as a C/C++ programmer, while also having a rich standard library, which included great stuff like std.concurrency.Generator, std.parallelism, etc. Translating Python code into D was incredibly easy; additionally, a gir-d-generator which is actively maintained exists (I created a small fork anyway, to be able to directly link against the libappstream library, instead of dynamically loading it).

What is great about D?

This list is just a huge braindump of things I had on my mind at the time of writing.

Interfacing with C

There are multiple things which make D awesome, for example interfacing with C code (and, to a limited degree, with C++ code) is really easy. Also, working with functions from C in D feels natural. Take these C functions imported into D:
extern(C):
nothrow:

struct _mystruct {}
alias mystruct_p = _mystruct*;

mystruct_p mystruct_create ();
void mystruct_load_file (mystruct_p my, const(char) *filename);
void mystruct_free (mystruct_p my);
You can call them from D code in two ways:
auto test = mystruct_create ();
// treating "test" as function parameter
mystruct_load_file (test, "/tmp/example");
// treating the function as member of "test"
test.mystruct_load_file ("/tmp/example");
test.mystruct_free ();
This allows writing logically sane code, in case the C functions can really be considered member functions of the struct they are acting on. This property of the language is a general concept, so a function which takes a string as first parameter can also be called like a member function of string. Writing D bindings to existing C code is also really simple, and can even be automated using tools like dstep. Since D can also easily export C functions, calling D code from C is possible as well.

Getting rid of C++ cruft

There are many things which are bad in C++, some of which are inherited from C. D kills pretty much all of the stuff I found annoying. Some cool stuff from D is now in C++ as well, which makes this point a bit less strong, but it's still valid. E.g. getting rid of the #include preprocessor dance by using symbolic import statements makes sense, and there have IMHO been huge improvements over C++ when it comes to metaprogramming.

Incredibly powerful metaprogramming

Getting into detail about that would take way too long, but the metaprogramming abilities of D must be mentioned. You can do pretty much anything at compile time, for example compiling regular expressions to make them run faster at runtime, or mixing in additional code from string constants. The template system is also very well thought out, and never caused me headaches as much as C++ sometimes manages to do.

Built-in unit-test support

Unit testing with D is really easy: you just add one or more unittest blocks to your code, in which you write your tests. When running the tests, the D compiler will collect the unittest blocks and build a test application out of them. The unittest scope is useful because you can keep the actual code and the tests close together, and it encourages writing tests and keeping them up-to-date. Additionally, D has built-in support for contract programming, which helps to further reduce bugs by validating input/output.

Safe D

While D gives you the whole power of a low-level system programming language, it also allows you to write safer code and have the compiler check for that, while still being able to use unsafe functions when needed. Unfortunately, @safe is not the default for functions though.

Separate operators for addition and concatenation

D exclusively uses the + operator for addition, while the ~ operator is used for concatenation. This is likely a personal quirk, but I love very much that this distinction exists. It's nice for things like addition of two vectors vs. concatenation of vectors, and makes the whole language much more precise in its meaning.

Optional garbage collector

D has an optional garbage collector. Developing in D without the GC is currently a bit cumbersome, but these issues are being addressed. If you can live with a GC though, having it active makes programming much easier.

Built-in documentation generator

This is almost a given for most new languages, but still something I want to mention: Ddoc is a standard tool to generate code documentation for D code, with a defined syntax for describing function parameters, classes, etc. It will even take the contents of a unittest scope to generate automatic examples for the usage of a function, which is pretty cool.

Scope blocks

The scope statement allows one to execute a bit of code before the function exits, when it failed or when it was successful. This is incredibly useful when working with C code, where a free statement needs to be issued when the function is exited, or some arbitrary cleanup needs to be performed on error.
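As a quick illustration of the scope guards just described, here is a minimal, self-contained sketch (my own example, not code from the post) that frees a C-allocated buffer on every exit path:

import std.stdio : writeln;
import core.stdc.stdlib : malloc, free;

void processBuffer ()
{
    auto buf = malloc (1024);
    if (buf is null)
        return;
    // runs whenever the function is left, no matter how (normal return or exception)
    scope (exit) free (buf);
    // runs only if the scope is left because of an exception
    scope (failure) writeln ("processing failed, the buffer is released anyway");

    writeln ("doing some work with the buffer ...");
}

void main ()
{
    processBuffer ();
}

Compared to pairing every allocation with a manually placed free before each return, the cleanup stays right next to the allocation it belongs to.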
Yes, we do have smart pointers in C++, and with some GCC/Clang extensions a similar feature in C too. But the scopes concept in D is much more powerful. See Scope Guard Statement for details.

Built-in syntax for parallel programming

Working with threads is so much more fun in D compared to C! I recommend taking a look at the parallelism chapter of the Programming in D book.

Pure functions

D allows one to mark functions as purely functional, which allows the compiler to do optimizations on them, e.g. cache their return value. See pure-functions.

D is fast!

D matches the speed of C++ in almost all occasions, so you won't lose performance when writing D code; that is, unless you have the GC run often in a threaded environment.

Very active and friendly community

The D community is very active and friendly; so far I have only had good experiences, and I basically came into the community asking some tough questions regarding distro integration and ABI stability of D. The D community is very enthusiastic about pushing D, and especially the metaprogramming features of D, to its limits, and consists of very knowledgeable people. Most discussion happens at the forums/newsgroups at forum.dlang.org.

What is bad about D?

Half-proprietary reference compiler

This is probably the biggest issue. Not because the proprietary compiler is bad per se, but because of the implications this has for the D ecosystem. For the reference D compiler, Digital Mars D (DMD), only the frontend is distributed under a free license (Boost), while the backend is proprietary. The FLOSS frontend is what the free compilers, LLVM D Compiler (LDC) and GNU D Compiler (GDC), are based on. But since DMD is the reference compiler, most features land there first, and the Phobos standard library and druntime are tuned to work with DMD first. Since major Linux distributions can't ship DMD, and the free compilers GDC and LDC lag behind DMD in terms of language, runtime and standard-library compatibility, this creates a split world of code that compiles with LDC, GDC or DMD, but never with all D compilers, due to it relying on features not yet in e.g. GDC's Phobos. Especially for Linux distributions, there is no way to say "use this compiler to get the best and latest D compatibility". Additionally, if people can't simply apt install latest-d, they are less likely to try the language. This is probably mainly an issue on Linux, but since Linux is the place where web applications are usually written and people are likely to try out new languages, it's really bad that the proprietary reference compiler is hurting D adoption in that way. That being said, I want to make clear that DMD is a great compiler, which is very fast and builds efficient code. I only criticise the fact that it is the language's reference compiler.

UPDATE: To clarify the half-proprietary nature of the compiler, let me quote the D FAQ:
The front end for the dmd D compiler is open source. The back end for dmd is licensed from Symantec, and is not compatible with open-source licenses such as the GPL. Nonetheless, the complete source comes with the compiler, and all development takes place publically on github. Compilers using the DMD front end and the GCC and LLVM open source backends are also available. The runtime library is completely open source using the Boost License 1.0. The gdc and ldc D compilers are completely open sourced.
Phobos (standard library) is deprecating features too quickly

This basically goes hand in hand with the compiler issue mentioned above. Each D compiler ships its own version of Phobos, which it was tested against. For GDC, which I used to compile my code due to LDC having bugs at that time, this means that it is shipping with a very outdated copy of Phobos. Due to the rapid evolution of Phobos, this meant that the documentation of Phobos and the actual code I was working with were not always in sync, leading to many frustrating experiences. Furthermore, Phobos sometimes removes deprecated bits about a year after they have been deprecated. Together with the older-Phobos situation, you might find yourself in a place where a feature was dropped, but the cool replacement is not yet available. Or you are unable to import some 3rd-party code because it uses some deprecated-and-removed feature internally. Or you are unable to use other code, because it was developed with a D compiler shipping with a newer Phobos. This is really annoying, and probably the biggest source of unhappiness I had while working with D; especially the documentation not matching the actual code is a bad experience for someone new to the language.

Incomplete free compilers with varying degrees of maturity

LDC and GDC have bugs, and for someone new to the language it's not clear which one to choose. Both LDC and GDC have their own issues at the moment, but they are rapidly getting better, and I only encountered some actual compiler bugs in LDC (GDC worked fine, but with an incredibly out-of-date Phobos). All issues are fixed meanwhile, but this was a frustrating experience. Some clear advice or explanation of which of the free compilers to prefer when you are new to D would be neat. For GDC in particular, being developed outside of the main GCC project is likely a problem, because distributors need to manually add it to their GCC packaging, instead of having it readily available. I assume this is due to the DRuntime/Phobos not being subjected to the FSF CLA, but I can't actually say anything substantial about this issue. Debian adds GDC to its GCC packaging, but e.g. Fedora does not do that.

No ABI compatibility

D has a defined ABI; too bad that in reality, the compilers are not interoperable. A binary compiled with GDC can't call a library compiled with LDC or DMD. GDC actually doesn't even support building shared libraries yet. For distributions, this is quite terrible, because it means that there must be one default D compiler, without any exception, and that users also need to use that specific compiler to link against distribution-provided D libraries. The different runtimes per compiler complicate that problem further.

The D package manager, dub, does not yet play well with distro packaging

This is an issue that is important to me, since I want my software to be easily packageable by Linux distributions. The issues causing packaging to be hard are reported as dub issue #838 and issue #839, with quite positive feedback so far, so this might soon be solved.

The GC is sometimes an issue

The garbage collector in D is quite dated (according to their own docs) and is currently being reworked. While working with asgen, which is a program creating a large amount of interconnected data structures in a threaded environment, I realized that the GC is significantly slowing down the application when threads are used (it also seems to use the UNIX signals SIGUSR1 and SIGUSR2 to stop/resume threads, which I still find odd).
Also, the GC performed poorly under memory pressure, which did get asgen killed by the OOM killer on some more memory-constrained machines. Triggering a manual collection run once a large batch of these interconnected data structures wasn't needed anymore solved this problem for most systems, but it would of course have been better not to need to give the GC any hints. The stop-the-world behavior isn't a problem for asgen, but it might be for other applications. These issues are currently being worked on, with a GSoC project laying the foundation for further GC improvements. version is a reserved word: Okay, that is admittedly a very tiny nitpick, but when developing an app which works with packages and versions, it's slightly annoying. The version keyword is used for conditional compilation, and needing to abbreviate it to ver in all parts of the code sucks a little (e.g. the Package interface can't have a property named version, but now has ver instead; a small sketch at the end of this post illustrates the point). The ecosystem is not (yet) mature: In general it can be said that the D ecosystem, while existing for almost 9 years, is not yet that mature. There are various quirks you have to deal with when working with D code on Linux. It's never anything major, and usually you can easily solve these issues and go on, but it's annoying to have these papercuts. This is not something which can be resolved by D itself; this point will solve itself as more people start to use D and D support in Linux distributions gets more polished. Conclusion: I like to work with D, and I consider it to be a great language; the quirks it has in its toolchain are not bad enough to prevent writing great things with it. At the moment, if I am not writing a shared library or something which uses a lot of existing C++ code, I would prefer D for the task. If a garbage collector is a problem (e.g. for some real-time applications, or when the target architecture can't run a GC), I would not recommend using D; Rust seems to be the much better choice then. In any case, D's flat learning curve (for C/C++ people) paired with the smart choices taken in language design, the powerful metaprogramming, the rich standard library and the helpful community make it great to try out and to develop software for scenarios where you would otherwise choose C++ or Java. Quite honestly, I think D could be a great language for tasks where you would usually choose Python, Java or C++, and I am seriously considering replacing quite a bit of Python code with D code. For very low-level stuff, C is IMHO still the better choice. As always, choosing the right programming language is only 50% technical aspects, and 50% personal taste. UPDATE: To get some idea of D, check out the D tour on the new website tour.dlang.org.
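As a tiny illustration of the version point above (and of the kind of manual GC hint mentioned earlier), here is a minimal, hypothetical D sketch. It is not taken from asgen's actual sources; only the Package interface name and its ver property come from the text above, everything else is made up for the example.

import core.memory : GC;

// version selects code at compile time, so it cannot double as an identifier.
version (linux)
{
    enum string platformName = "linux";
}
else
{
    enum string platformName = "other";
}

interface Package
{
    // @property string version();  // would not compile: version is a keyword
    @property string ver();          // hence the abbreviated property name
}

void buildGraph()
{
    // ... build a large batch of interconnected, temporary data structures ...

    // Hypothetical hint of the kind described above: once that temporary garbage
    // is unreachable, ask the GC to collect and return freed memory to the OS.
    GC.collect();
    GC.minimize();
}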

16 April 2016

Matthias Klumpp: Introducing AppStream-Generator

Since mid-2015 we have been using the dep11-generator in Debian to build AppStream metadata about available software components in the distribution. Getting rid of dep11-generator: Unfortunately, the old Python-based dep11-generator hit some hard limits pretty soon. For example, using multiprocessing with Python was a pain, since it resulted in some very hard-to-track bugs. Also, the multiprocessing approach (as opposed to multithreading) made it impossible to use the underlying LMDB database properly (it was basically closed and reopened in each forked-off process, since pickling the Python LMDB object caused some really funny bugs, which usually manifested themselves in the application hanging forever without any information on what was going on). In addition to that, the Python-based generator forced me to maintain two implementations of the AppStream YAML spec, one in C and one in Python, which consumed quite some time. There were also some other issues in the implementation (e.g. no unit tests), which made me think about rewriting the generator. Adventures in Go / Rust / D: Since I didn't want to write this new piece of software in C (or basically, writing it in C was my last option), I explored Go and Rust for this purpose and also did a small prototype in the D programming language, when I was starting to feel really adventurous. And while I never intended to write the new generator in D (I was pretty fixated on Go), this is what happened. The strong points of D for this particular project were its close relation to C (and ease of using existing C code), its super-flat learning curve for someone who knows and likes C and C++, and its pretty powerful implementations of the concurrent and parallel programming paradigms. That being said, not all is great in D and there are some pretty dark spots too, mainly when it comes to the standard library and compilers. I will dive into my experiences with D in a separate blogpost. What good to expect from appstream-generator? So, what can the new appstream-generator do for you? Basically, the same as the old dep11-generator: it will extract metadata from a distribution's package archive, download and resize screenshots, search for icons and size them properly, and generate reports in JSON and HTML of the found metadata and issues. LibAppStream-based parsing, generation of YAML or XML, multi-distro support: As opposed to the old generator, the new generator utilizes the metadata parsers and writers of libappstream. This allows it to return the extracted metadata as AppStream YAML (for Debian) or XML (everyone else). It is also written in a distribution-agnostic way, so if someone wants to use it in a different distribution than Debian, this is possible now. It just requires a very small distribution-specific backend to be written; all of the details of the metadata extraction are abstracted away (just two interfaces need to be implemented, see the sketch at the end of this post). While I do not expect anyone except Debian to use this in the near future (most distros have found a solution to generate metadata already), the frontend-backend split is a much cleaner design than what was available in the previous code. It also makes it possible to unit-test the code properly, without providing a Debian archive in the testsuite. Feature Flags, Optipng: The new generator also allows enabling and disabling certain sets of features in a standardized way. E.g. Ubuntu uses a language-pack system for translations, which Debian doesn't use.
Features like this can be implemented as separate, disableable modules in the generator. We use this at the moment to e.g. allow descriptions from packages to be used as AppStream descriptions, or to run optipng on the generated PNG images and icons. No more Contents file dependency: Another issue the old generator had was that it used the Contents file from the Debian archive to find matching icons for an application. We could never be sure whether the contents listed in the Contents file actually matched the contents of the package we were currently dealing with. What made things worse is that at Ubuntu, the archive software is only updating the Contents file weekly or daily (while the generator might run multiple times a day), which has led to software being ignored in the metadata because icons could not yet be found. Even on Debian, with its quickly-updated Contents file, we could immediately see the effects of an out-of-date Contents file when updating it failed once. In the new generator, we now read the contents of each package ourselves and store them in an LMDB database, bypassing the Contents file and removing the whole class of problems resulting from missing or wrong contents data. It can't all be good, right? That is true, there are also some known issues the new generator has: Large amounts of RAM required: The better speed of the new generator comes at the cost of holding more stuff in RAM. Much more. When processing data from 5 architectures initially on Debian, the amount of required RAM might lie above 4GB, with the OOM killer sometimes being quicker than the garbage collector. That being said, on subsequent runs the amount of required memory is much lower. Still, this is something I am working to improve. What are symbolic links? To be faster, the appstream-generator will read the md5sums file in .deb packages instead of extracting the payload archive and reading its contents. Since the md5sums file does not list symbolic links, symlinks basically don't exist for the new generator. This is a problem for software symlinking icons or even .desktop files around, like e.g. LibreOffice does. I am still investigating how widespread the use of symlinks for icons and .desktop files is, but it looks like fixing packages (making them move the files rather than symlinking them) might be a better approach than investing additional computing power to find symlinks or even switching back to parsing the Contents file. Input on this is welcome! Deploying asgen: I finished the last pieces of the appstream-generator (together with doing lots of other cool things and talking to great people) at the GNOME Software Hackfest in London last week (detailed blogposts about things that happened there will follow; many thanks once again to the Ubuntu community for sponsoring my attendance!). Since today, the new generator is running on the Debian infrastructure. If bigger issues are found, we can still roll back to the old code. I decided to deploy this sooner rather than later, so we can get some good testing done before the Stretch release. Please report any issues you may find!
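To give a rough idea of the frontend/backend split mentioned above, here is a minimal, hypothetical D sketch of what the two distribution-specific interfaces could look like. The Package interface and its ver property are mentioned in the author's D experience post; the PackageIndex name and the other methods and signatures are made up for illustration and are not necessarily asgen's actual API.

// Hypothetical index interface: lists the packages of a suite/section/architecture.
interface PackageIndex
{
    Package[] packagesFor(string suite, string section, string arch);
    void release();  // drop any cached state
}

// Hypothetical package interface: gives access to one package's data.
interface Package
{
    @property string name();
    @property string ver();
    @property string arch();
    @property string[] contents();             // file list of the package
    const(ubyte)[] getFileData(string fname);  // read one file from the payload
}

A new distribution would provide concrete classes for these two interfaces; screenshot handling, icon search and report generation stay in the generic frontend.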

2 April 2016

Petter Reinholdtsen: syslog-trusted-timestamp - chain of trusted timestamps for your syslog

Two years ago, I had a look at trusted timestamping options available, and among other things noted a still open bug in the tsget script included in openssl that made it harder than necessary to use openssl as a trusted timestamping client. A few days ago I was told the Norwegian government office DIFI is close to releasing their own trusted timestamp service, and in the process I was happy to learn about a replacement for the tsget script using only curl:
openssl ts -query -data "/etc/shells" -cert -sha256 -no_nonce \
    | curl -s -H "Content-Type: application/timestamp-query" \
         --data-binary "@-" http://zeitstempel.dfn.de > etc-shells.tsr
openssl ts -reply -text -in etc-shells.tsr
This produces a binary timestamp file (etc-shells.tsr) which can be used to verify that the content of the file /etc/shells, with the calculated sha256 hash, existed at the point in time when the request was made. The last command extracts the content of etc-shells.tsr in human-readable form. The idea behind such a timestamp is to be able to prove, using cryptography, that the content of a file has not changed since the file was stamped. To verify that the file on disk matches the public key signature in the timestamp file, run the following commands. They make sure you have the required certificate for the trusted timestamp service available and use it to compare the file content with the timestamp. In production, one should of course use a better method to verify the service certificate.
wget -O ca-cert.txt https://pki.pca.dfn.de/global-services-ca/pub/cacert/chain.txt
openssl ts -verify -data /etc/shells -in etc-shells.tsr -CAfile ca-cert.txt -text
Wikipedia has a lot more information about trusted timestamping and linked timestamping, and there are several trusted timestamping services around, both as commercial services and as free and public services. Among the latter are the zeitstempel.dfn.de service mentioned above and the freetsa.org service linked to from the Wikipedia web site. I believe the DIFI service should show up on https://tsa.difi.no, but it is not available to the public at the moment. I hope this will change when it goes into production. The RFC 3161 trusted timestamping protocol standard is even implemented in LibreOffice, Microsoft Office and Adobe Acrobat, making it possible to verify when a document was created. I would find it useful to be able to use such a trusted timestamp service to make it possible to verify that my stored syslog files have not been tampered with. This is not a new idea. I found one example implemented on the Endian network appliances, where the configuration of such a feature was described in 2012. But I could not find any free implementation of such a feature when I searched, so I decided to try to build a prototype named syslog-trusted-timestamp. My idea is to generate a timestamp of the old log files after they are rotated, and store the timestamp in the new log file just after rotation. This will form a chain that would make it possible to see if any old log files are tampered with. But syslog is bad at handling kilobytes of binary data, so I decided to base64 encode the timestamp and add an ID and line sequence numbers to the base64 data to make it possible to reassemble the timestamp file again. To use it, simply run it like this:
syslog-trusted-timestamp /path/to/list-of-log-files
This will send a timestamp from one or more timestamp services (not yet decided nor implemented) for each listed file to the syslog using logger(1). To verify the timestamp, the same program is used with the --verify option:
syslog-trusted-timestamp --verify /path/to/log-file /path/to/log-with-timestamp
The verification step is not yet well designed. The current implementation depends on the file path being unique and unchanging, and this is not a solid assumption. It also uses the process number as the timestamp ID, and this is bound to create ID collisions. I hope to have time to come up with a better way to handle timestamp IDs and verification later. Please check out the prototype for syslog-trusted-timestamp on GitHub and send suggestions and improvements, or let me know if a similar system for timestamping logs already exists, to allow me to join forces with others with the same interest. As usual, if you use Bitcoin and want to show your support of my activities, please send Bitcoin donations to my address 15oWEoG9dUPovwmUL9KWAnYRtNJEkP1u1b.

28 March 2016

Rhonda D'Vine: Ich bin was ich bin

As my readers are probably well aware, I wrote my transgender coming-out poem Mermaids over 10 years ago, to make it clear to people how I define myself, what I am and how I would hope they could accept me. I put it publicly on my blog so I could point people to it, and I still do so regularly. It still comes from the bottom of my heart. And I am very happy that I got the chance to present it at a Poetry Slam last year; it was even recorded and uploaded to YouTube. There is just one thing that some people told me every now and then over the years: they would have liked to understand what's going on, but it is in English and their English isn't that good. My usual response was along the lines of: the events that triggered me into writing it happened in an international context, and I wanted to make sure that the people involved understood what I wrote. At that time I didn't realize that I was cutting out a different group of people from being able to understand what's going on inside me. So this year there was a similar event: the Flawless Poetry Slam, which touched the topics of Feminist? Queer? Gender? Rolemodels? - Let's talk about it. I took that as motivation to finally write another text on the topic, and this time in German. Unfortunately I wasn't able to present it that evening; I wasn't drawn for the lineup. But I was told that there was another slam going on just last Wednesday, so I went there ... and made it onto the stage! And this is the text that I presented there. I am uncertain how well online translators work for you, but I hope you get the core points if you don't understand German:
Ich bin was ich bin
Fünf Worte mit wahrem Sinn:
Ich bin was ich bin Du denkst: "Mann im Rock?
Das ist ja wohl lächerlich,
der ist sicher schwul." "Fingernagellack?
Na da schau ich nicht mehr hin,
wer will das schon seh'n." Jedoch liegst du falsch,
Mit all deinen Punkten, denn:
Ich bin was ich bin. Ich bin Transgender
Und erlebe mich selber,
ich bin eine Frau. "Haha, eine Frau?
Wem willst du das weismachen?
Heb mal den Rock hoch!" Und wie ist's bei dir?
Was ist zwischen den Beinen?
Geht mich das nichts an? Warum fragst du mich?
Da ist's dann in Ordnung?
Oder vielleicht nicht? Ich bin was ich bin
Fünf Worte mit ernstem Sinn:
Ich bin was ich bin Ich steh weiblich hier
Und das hier ist mein Körper
Mein Geschlecht ist's auch Oberflächlichkeit
Das ist mein größtes Problem
Schlägt mir entgegen Wenn ich mich öffne
Verständnis fast überall
Es wird akzeptiert Doch gelegentlich
und das schmerzt mich am meisten
sagt doch mal wer "er" Von Fremden? Egal
Doch hab ich mich geöffnet
Ist es eine Qual "Ich seh dich als Mann"
Da ist, was es transportiert
Akzeptanz? Dahin Meine Pronomen
Wenn ihr über mich redet
sind sie, ihr, ihres Ich leb was ich leb
Fünf Worte mit tiefem Sinn:
Ich bin was ich bin "Doch, wie der erst spricht!
Ich meinte, wie sie denn spricht!
Das ist nicht normal." Ich schreib hier Haikus:
Japanische Gedichtsform
Mit fixem Versmaß Sind fünf, sieben, fünf
Silben in jeder Zeile
Haikus sind simpel Probier es mal aus
Transportier eine Message
Es macht auch viel Spaß Wortwahl ist wichtig
Ein guter Thesaurus hilft
Sei kurz und prägnant Ich sag was ich sag
Fünf Worte mit klugem Sinn:
Ich bin was ich bin Doch ich schweife ab
Verständnis fast überall?
Wird es akzeptiert? Erstaunlicherweise
Doch ich bin auch was and'res
Und hier geht's bergab Eine Sache gibt's
Die erwäh'n ich besser nicht
für die steck ich ein "Deshalb bin ich hier"
So der Titel eines Lieds
verfasst von Thomas D "Wenn ich erkläre
warum ich mich wie ernähr"
So weit komm ich nicht Man erwähnt Vegan
Die Intoleranz ist da
Man ist unten durch "Mangelerscheinung!"
"Das Essen meines Essens!"
Akzeptanz ade Hab 'ne Theorie:
Vegan sein: 'ne Entscheidung
Transgender sein nicht Mensch fühlt sich dann schlecht
dass bei sich selbst die Kraft fehlt
und greift damit an "Ich könnte das nicht"
Ich verurteile dich nicht
Iss doch was du willst Ich zwing es nicht auf
Aber Rücksicht wär schon fein
Statt nur Hohn und Schmäh Ich ess was ich ess
Fünf Worte zum Nachdenken:
Ich bin was ich bin
Hope you get the idea. The audience definitely liked it; the jury wasn't so much on board, but that's fine, it's five random people and it's mostly for fun anyway. Later that night, though, some things happened that didn't make me feel so comfortable anymore. I went to the loo, waiting in line with the other ladies, and a bit later the waitress came along telling me "the men's room is over there". I told her that I was aware of that and thanked her, which confused her; she said something along the lines of "so you are both, or what?" but went away after that. Her tone and response didn't really give me much comfort, though none of the other ladies in the line looked at me strangely.
But the most disturbing event after that was finding out that North Carolina had signed the bathroom bill, making it illegal for trans people to use the bathroom matching their gender and insisting they use the one for the gender they were assigned at birth. So men like James Sheffield are now forced to go to the ladies' restroom, or face getting arrested. Brave new world. :/ So, enjoy the text, don't get too wound up by stupid laws, and hope that time will fix people's discriminatory minds. The issues such laws claim to address are already regulated: assaults are assaults and are already banned. Arguing that people might get assaulted, and on that basis discriminating against trans people, is totally missing the point, by miles.


2 January 2016

Daniel Pocock: The great life of Ian Murdock and police brutality in context

Tributes: (You can Follow or Tweet about this blog on Twitter) Over the last week, people have been saying a lot about the wonderful life of Ian Murdock and his contributions to Debian and the world of free software. According to one news site, a San Francisco police officer, Grace Gatpandan, has been doing the opposite, starting a PR spin operation, leaking snippets of information about what may have happened during Ian's final 24 hours. Sadly, these things are now starting to be regurgitated without proper scrutiny by the mainstream press (note the erroneous reference to SFGate with link to SFBay.ca, this is British tabloid media at its best). The report talks about somebody (no suggestion that it was even Ian) "trying to break into a residence". Let's translate that from the spin-doctor-speak back to English: it is the silly season, when many people have a couple of extra drinks and do silly things like losing their keys. "a residence", or just their own home perhaps? Maybe some AirBNB guest arriving late to the irritation of annoyed neighbours? Doesn't the choice of words make the motive sound so much more sinister? Nobody knows the full story and nobody knows if this was Ian, so snippets of information like this are inappropriate, especially when somebody is deceased. Did they really mean to leave people with the impression that one of the greatest visionaries of the Linux world was also a cat burglar? That somebody who spent his life giving selflessly and generously for the benefit of the whole world (his legacy is far greater than Steve Jobs, as Debian comes with no strings attached) spends the Christmas weekend taking things from other people's houses in the dark of the night? The report doesn't mention any evidence of a break-in or any charges for breaking-in. If having a few drinks and losing your keys in December is such a sorry state to be in, many of us could potentially be framed in the same terms at some point in our lives. That is one of the reasons I feel so compelled to write this: somebody else could be going through exactly the same experience at the moment you are reading this. Any of us could end up facing an assault as unpleasant as the tweets imply at some point in the future. At least I can console myself that as a privileged white male, the risk to myself is much lower than for those with mental illness, the homeless, transgender, Muslim or black people but as the tweets suggest, it could be any of us. The story reports that officers didn't actually come across Ian breaking in to anything, they encountered him at a nearby street corner. If he had weapons or drugs or he was known to police that would have almost certainly been emphasized. Is it right to rush in and deprive somebody of their liberties without first giving them an opportunity to identify themselves and possibly confirm if they had a reason to be there? The report goes on, "he was belligerent", "he became violent", "banging his head" all by himself. How often do you see intelligent and successful people like Ian Murdock spontaneously harming themselves in that way? Can you find anything like that in any of the 4,390 Ian Murdock videos on YouTube? How much more frequently do you see reports that somebody "banged their head", all by themselves of course, during some encounter with law enforcement? Do police never make mistakes like other human beings? 
If any person was genuinely trying to spontaneously inflict a head injury on himself, as the police have suggested, why wouldn't the police leave them in the hospital or other suitable care? Do they really think that when people are displaying signs of self-harm, rounding them up and taking them to jail will be in their best interests? Now, I'm not suggesting this started out with some sort of conspiracy. Police may have been at the end of a long shift (and it is a disgrace that many US police are not paid for their overtime) or just had a rough experience with somebody far more sinister. On the other hand, there may have been a mistake, gaps in police training or an inappropriate use of a procedure that is not always justified, like a strip search, that causes profound suffering for many victims. A select number of US police forces have been shamed around the world for a series of incidents of extreme violence in recent times, including the death of Michael Brown in Ferguson, shooting Walter Scott in the back, death of Freddie Gray in Baltimore and the attempts of Chicago's police to run an on-shore version of Guantanamo Bay. Beyond those highly violent incidents, the world has also seen the abuse of Ahmed Mohamed, the Muslim schoolboy arrested for his interest in electronics and in 2013, the suicide of Aaron Swartz which appears to be a direct consequence of the "Justice" department's obsession with him. What have the police learned from all this bad publicity? Are they changing their methods, or just hiring more spin doctors? If that is their response, then doesn't it leave them with a cruel advantage over those people who were deceased? Isn't it standard practice for some police to simply round up anybody who is a bit lost and write up a charge sheet for resisting arrest or assaulting an officer as insurance against questions about their own excessive use of force? When British police executed Jean Charles de Menezes on a crowded tube train and realized they had just done something incredibly outrageous, their PR office went to great lengths to try and protect their image, even photoshopping images of Menezes to make him look more like some other suspect in a wanted poster. To this day, they continue to refer to Menezes as a victim of the terrorists, could they be any more arrogant? While nobody believes the police woke up that morning thinking "let's kill some random guy on the tube", it is clear they made a mistake and like many people (not just police), they immediately prioritized protecting their reputation over protecting the truth. Nobody else knows exactly what Ian was doing and exactly what the police did to him. We may never know. However, any disparaging or irrelevant comments from the police should be viewed with some caution. The horrors of incarceration It would be hard for any of us to understand everything that an innocent person goes through when detained by the police. The recently released movie about The Stanford Prison Experiment may be an interesting place to start, a German version produced in 2001, Das Experiment, is also very highly respected. The United States has the largest prison population in the world and the second-highest per-capita incarceration rate. Many, including some on death row, are actually innocent, in the wrong place at the wrong time, without the funds to hire an attorney. The system, and the police and prison officers who operate it, treat these people as packages on a conveyor belt, without even the most basic human dignity. 
Whether their encounter lasts for just a few hours or decades, is it any surprise that something dies inside them when they discover this cruel side of American society? Worldwide, there is an increasing trend to make incarceration as degrading as possible. People may be innocent until proven guilty, but this hasn't stopped police in the UK from locking up and strip-searching over 4,500 children in a five year period, would these children go away feeling any different than if they had an encounter with Jimmy Saville or Rolf Harris? One can only wonder what they do to adults. What all this boils down to is that people shouldn't really be incarcerated unless it is clear the danger they pose to society is greater than the danger they may face in a prison. What can people do for Ian and for justice? Now that these unfortunate smears have appeared, it would be great to try and fill the Internet with stories of the great things Ian has done for the world. Write whatever you feel about Ian's work and your own experience of Debian. While the circumstances of the final tweets from his Twitter account are confusing, the tweets appear to be consistent with many other complaints about US law enforcement. Are there positive things that people can do in their community to help reduce the harm? Sending books to prisoners (the UK tried to ban this) can make a difference. Treat them like humans, even if the system doesn't. Recording incidents of police activities can also make a huge difference, such as the video of the shooting of Walter Scott or the UK police making a brutal unprovoked attack on a newspaper vendor. Don't just walk past a situation and assume everything is under control. People making recordings may find themselves in danger, it is recommended to use software that automatically duplicates each recording, preferably to the cloud, so that if the police ask you to delete such evidence, you can let them watch you delete it and still have a copy. Can anybody think of awards that Ian Murdock should be nominated for, either in free software, computing or engineering in general? Some, like the prestigious Queen Elizabeth Prize for Engineering can't be awarded posthumously but others may be within reach. Come and share your ideas on the debian-project mailing list, there are already some here. Best of all, Ian didn't just build software, he built an organization, Debian. Debian's principles have helped to unite many people from otherwise different backgrounds and carry on those principles even when Ian is no longer among us. Find out more, install it on your computer or even look for ways to participate in the project.

15 November 2015

Lunar: Reproducible builds: week 29 in Stretch cycle

What happened in the reproducible builds effort this week: Toolchain fixes Emmanuel Bourg uploaded eigenbase-resgen/1.3.0.13768-2, which uses the scm-safe comment style by default to make them deterministic. Mattia Rizzolo started a new thread on debian-devel to ask a wider audience about issues with the -Wdate-time compile time flag. When enabled, GCC and clang print warnings when __DATE__, __TIME__, or __TIMESTAMP__ are used. Having the flag set by default would prompt maintainers to remove these sources of unreproducibility from the sources. Packages fixed The following packages have become reproducible due to changes in their build dependencies: bmake, cyrus-imapd-2.4, drobo-utils, eigenbase-farrago, fhist, fstrcmp, git-dpm, intercal, libexplain, libtemplates-parser, mcl, openimageio, pcal, powstatd, ruby-aggregate, ruby-archive-tar-minitar, ruby-bert, ruby-dbd-odbc, ruby-dbd-pg, ruby-extendmatrix, ruby-rack-mobile-detect, ruby-remcached, ruby-stomp, ruby-test-declarative, ruby-wirble, vtprint. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues, but not all of them: Patches submitted which have not made their way to the archive yet: reproducible.debian.net The fifth and sixth armhf build nodes have been set up, resulting in five more builder jobs for armhf. More than 10,000 packages have now been identified as reproducible with the reproducible toolchain on armhf. (Vagrant Cascadian, h01ger) Helmut Grohne and Mattia Rizzolo now have root access on all 12 build nodes used by reproducible.debian.net and jenkins.debian.net. (h01ger) reproducible-builds.org is now linked from all package pages and the reproducible.debian.net dashboard. (h01ger) profitbricks-build5-amd64 and profitbricks-build6-amd64, responsible for running amd64 tests, now run 398.26 days in the future. This means that one of the two builds that are being compared will be run on a different minute, hour, day, month, and year. This is not yet the case for armhf. FreeBSD tests are also done with a 398.26 day difference. (h01ger) The design of the Arch Linux test page has been greatly improved. (Levente Polyak) diffoscope development Three releases of diffoscope happened this week, numbered 39 to 41. They include support for EPUB files (Reiner Herrmann) and Free Pascal unit files, usually having .ppu as extension (Paul Gevers). The rest of the changes were mostly targeted at making it easier to run diffoscope on other systems. The tlsh, rpm, and debian modules are now all optional. The test suite will properly skip tests that need optional tools or modules when they are not available. As a result, diffoscope is now available on PyPI and, thanks to the work of Levente Polyak, in Arch Linux. Getting these versions into Debian was a bit cumbersome. Version 39 was uploaded with an expired key (according to the keyring on ftp.debian.org, which will hopefully be updated soon), which is currently handled by keeping the files in the queue without REJECTing them. This prevented any other Debian Developers from uploading the same version. Version 40 was uploaded as a source-only upload but failed to build from source, which had the undesirable side effect of removing the previous version from unstable. The package failed to build from source because it was built passing -I to debuild. This excluded the ELF object files and static archives used by the test suite from the archive, preventing the test suite from working correctly.
Hopefully, in the near future it will be possible to implement a sanity check to prevent such mistakes. It has also been identified that ppudump outputs the time in the system timezone without considering the TZ environment variable. Zachary Vance and Paul Gevers raised the issue on the appropriate channels. strip-nondeterminism development Chris Lamb released strip-nondeterminism version 0.014-1, which disables stripping Mono binaries as it is too aggressive, and the source of the problem is being worked on by Mono upstream. Package reviews 133 reviews have been removed, 115 added and 103 updated this week. Chris West and Chris Lamb reported 57 new FTBFS bugs. Misc. The video of h01ger and Chris Lamb's talk at MiniDebConf Cambridge is now available. h01ger gave a talk at CCC Hamburg on November 13th, which was well received and sparked some interest among Gentoo folks. Slides and video should be available shortly. Frederick Kautz has started to revive Dhiru Kholia's work on testing Fedora packages. Your editor wishes to once again thank the #debian-reproducible regulars for reviewing these reports week after week.

14 October 2015

Rhonda D'Vine: Post DebConf15

There are some things that I didn't mention in my sort-of quickly written entry about DebConf15. So first things first. When I received the mail about the room allocation I was at first confused. I was put into a room with other ladies, which I didn't expect. Granted, two of the other three names were people who have known me for a while, but it still felt like a mistake might have happened. But after a while I realized what had happened: it wasn't a mistake, it was intentional, I was finally recognized as a woman for the room allocation too, which made me extremely happy. I was just concerned about the third person who would be in our room, who doesn't know me yet, and whether it would make them feel uncomfortable. In the end, that was no trouble at all. I felt so empowered and more accepted than ever in this community. And when I finally was on-site, another thing happened with me. I started to use the women's restroom. Up to now I usually had the feeling of "it's fine for me to use the male one, and I don't want other women to feel uncomfortable", but somehow, with a skirt on, it in the end made me feel uncomfortable. Additionally, there were only three times in total when I used the male toilet (and one was on the boat for the daytrip), and every single time I felt extremely uncomfortable with it, like others might think I'm just faking it. It, at least in my mind, doesn't help with accepting me as female when I go to the male restroom. And it's not a Good choice! as a woman put it during the conference dinner when there was a longer queue in front of the male restroom. It's not so much of a choice over here. But I give her the benefit of the doubt of not knowing how important these little steps became to me over time. Totally unrelated to the restroom question, but interestingly featuring it a fair bit, I was made aware of the Assigned Male cartoon. I instantly fell in love with it, and in case you want to enlighten yourself a bit more about how some things you might say or do get received by trans people, be very much invited to read it. Sophie is currently on a European tour with her book; unfortunately Vienna/Austria doesn't seem to be part of Europe in that respect, so I hope someone will be able to visit one of her stations to pick up a book for me ... And then there was also a small unofficial Nail Polish BoF going on at DebConf. I left it on my fingers for the next two weeks, totally in love with it. Unfortunately the nail polish I got for myself after DebConf had a rather big brush, so when I tried to work with it myself I failed miserably. ... which brings me to the empowerment that DebConf meant for me this year, and the time since. Given that I left the nail polish on, I even took comfort in being myself and went to work in my skirt on a more regular basis. Also, a very nice friend did visit me and we went lipstick shopping. I loved the color she chose, even though in the meantime it isn't visible enough for me and I guess I'll get another one rather sooner than later. Also, as I mentioned in my last blog post, my name change within the Debian project was granted. A quick update on that: my GPG key has now been replaced as well. I guess it's finally time for me to write a gpg transition statement, even though I don't follow those myself. I still prefer meeting up with people face-to-face for signing their new keys. But the fact that it's called a transition statement makes it more appealing to me on those grounds. :) And I got invited to a local podcast show.
Actually I have known the person who does the podcast for several years already; he's also part of the local free software community and attends various events, and he has been doing a podcast called Biertaucher (named after cooling the beer in a fountain) for several years now. It is held in German, so if you don't understand German you might want to skip these links.
In the first episode that I joined I talked about DebConf. Afterwards we were sitting together and talking about how they would like to have more social topics too, not just technical things. So we took that chance and talked in Biertaucher #221 about polyamory, which was a quite interesting experience. The host intentionally asked questions coming from a quite ignorant point of view, but it went nicely. We were three poly people sharing our views and insights into how it works for us.
Then there was Biertaucher #223, where it was just me and one of the hosts. We didn't have much to talk about from the past week, so we agreed to talk about transgender topics in the end. Granted, it's mostly my personal story, but I guess I got some important topics addressed in a useful way. And, after getting my name changed in Debian, I thought about what it might take to get my name changed officially, too (as if it could get more official than using it throughout my work environment, both paid and voluntary, but ...). I covered that in the podcast, but mostly it is either quite expensive or requires me to change my gender in the register of births, which requires a lot of other hassle, including psychiatry. Or, settle for a so-called "gender neutral" name as first name, neither of which sounds very convincing somehow ... Only time can tell, I guess. Guess that's enough for now; if I forgot something I might come back to it. :) One last note: I consider the Debian project a very welcoming one, and that can only work for a fair amount of people if the tone is right. So yes, I wholeheartedly agree with the Code of Conduct. And I'm very disappointed to see that there are still people in the project that are advocating for a freedom of expression, so to say. Respectful communication with each other is a must for a bigger community to work, not something that is merely nice to have, and calling someone names and ridiculing them for stating that is absolutely not acceptable. I encourage those people to watch the How to Thoroughly Offend and Insult People in Open Source presentation (or at least read the slides) that Gina Likins gave earlier this year. It might give them an idea why it's important to communicate respectfully with each other, and that includes banning degrading terms like "SJW" from your vocabulary, because it actually speaks a lot more about your own attitude than about that of the person you use it for.


27 September 2015

Lunar: Reproducible builds: week 22 in Stretch cycle

What happened in the reproducible builds effort this week: Toolchain fixes Packages fixed The following 22 packages became reproducible due to changes in their build dependencies: breathe, cdi-api, geronimo-jpa-2.0-spec, geronimo-validation-1.0-spec, gradle-propdeps-plugin, jansi, javaparser, libjsr311-api-java, mac-widgets, mockito, mojarra, pastescript, plexus-utils2, powerline, python-psutil, python-sfml, python-tldap, pythondialog, tox, trident, truffle, zookeeper. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which have not made their way to the archive yet: diffoscope development The changes to make diffoscope run under Python 3, along with many small fixes, entered the archive with version 35 on September 21st. Another release was made the very next day, fixing two encoding-related issues discovered when running diffoscope on more Debian packages. strip-nondeterminism development Version 0.12.0 now preserves file permissions on modified zip files and dh_strip_nondeterminism has been made compatible with older debhelper. disorderfs development Version 0.3.0 implemented a multi-user mode that was required to build Debian packages using disorderfs. It also added command line options to control the ordering of files in a directory (either shuffled or reversed) and another one to make arbitrary changes to the reported space used by files on disk. A couple of days later, version 0.4.0 was released to support locks, flush, fsync, fsyncdir, read_buf, and write_buf. Almost all known issues have now been fixed. reproducible.debian.net disorderfs is now used during the second build. This makes file ordering issues very easy to identify as such. (h01ger) Work has been done on making the distributed build setup more reliable. (h01ger) Documentation update Matt Kraii fixed the example on how to fix issues related to dates in Sphinx. Recent Sphinx versions should also be compatible with SOURCE_DATE_EPOCH. Package reviews 53 reviews have been removed, 85 added and 13 updated this week. 46 packages failing to build from source have been identified by Chris Lamb, Chris West, and Niko Tyni. Chris Lamb was the lucky reporter of bug #800000 on vdr-plugin-prefermenu. Issues related to disorderfs are being tracked with a new issue.

21 September 2015

Lunar: Reproducible builds: week 21 in Stretch cycle

If you see someone on the Debian ReproducibleBuilds project, buy him/her a beer. This work is awesome. What happened in the reproducible builds effort this week: Media coverage Nathan Willis covered our DebConf15 status update in Linux Weekly News. Access for non-LWN subscribers will be given on Thursday 24th. Linux Journal published a more general piece last Tuesday. Unexpected praise for reproducible builds appeared this week in the form of several iOS applications identified as including spyware. The malware was undetected by Apple's screening. This actually happened because application developers had simply downloaded a trojaned version of Xcode through an unofficial source. While reproducible builds can't really help users of non-free software, this is exactly the kind of attack that we are trying to prevent in our systems. Toolchain fixes Niko Tyni wrote and uploaded a better patch for the source order problem in libmodule-build-perl. Tristan Seligmann identified how the code generated by python-cffi could be emitted in random order in some cases. Upstream has already fixed the problem. Packages fixed The following 24 packages became reproducible due to changes in their build dependencies: apache-curator, checkbox-ng, gant, gnome-clocks, hawtjni, jackrabbit, jersey1, libjsr305-java, mathjax-docs, mlpy, moap, octave-geometry, paste, pdf.js, pyinotify, pytango, python-asyncssh, python-mock, python-openid, python-repoze.who, shadow, swift, tcpwatch-httpproxy, transfig. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which have not made their way to the archive yet: reproducible.debian.net Tests for Coreboot, OpenWrt, NetBSD, and FreeBSD now run weekly (instead of monthly). diffoscope development Python 3 offers new features (namely yield from and concurrent.futures) that could help implement parallel processing. The clear separation of bytes and unicode strings is also likely to reduce encoding-related issues. Mattia Rizzolo thus kicked off the effort of porting diffoscope to Python 3. tlsh was the only dependency missing a Python 3 module. This got quickly fixed by a new upload. The rest of the code has been moved to the point where only incompatibilities between Python 2.7 and Python 3.4 had to be changed. The commit stream still requires some cleanups, but all tests are now passing under Python 3. Documentation update The documentation on how to assemble the weekly reports has been updated. (Lunar) The example on how to use SOURCE_DATE_EPOCH with CMake has been improved. (Ben Beockel, Daniel Kahn Gillmor) The solution for timestamps in man pages generated by Sphinx now uses SOURCE_DATE_EPOCH. (Mattia Rizzolo) Package reviews 45 reviews have been removed, 141 added and 62 updated this week. 67 new FTBFS reports have been filed by Chris Lamb, Niko Tyni, and Lisandro Damián Nicanor Pérez Meyer. New issues added this week: randomness_in_r_rdb_rds_databases, python-ply_compiled_parse_tables. Misc. The prebuilder script is now properly testing umask variations again. Santiago Vila started a discussion on debian-devel on how binNMUs would work for reproducible builds.

20 June 2015

Lunar: Reproducible builds: week 4 in Stretch cycle

What happened about the reproducible builds effort for this week: Toolchain fixes Lunar rebased our custom dpkg on the new release, removing a now unneeded patch identified by Guillem Jover. An extra sort in the buildinfo generator prevented a stable order and was quickly fixed once identified. Mattia Rizzolo also rebased our custom debhelper on the latest release. Packages fixed The following 30 packages became reproducible due to changes in their build dependencies: animal-sniffer, asciidoctor, autodock-vina, camping, cookie-monster, downthemall, flashblock, gamera, httpcomponents-core, https-finder, icedove-l10n, istack-commons, jdeb, libmodule-build-perl, libur-perl, livehttpheaders, maven-dependency-plugin, maven-ejb-plugin, mozilla-noscript, nosquint, requestpolicy, ruby-benchmark-ips, ruby-benchmark-suite, ruby-expression-parser, ruby-github-markup, ruby-http-connection, ruby-settingslogic, ruby-uuidtools, webkit2gtk, wot. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which did not make their way to the archive yet: Also, the following bugs have been reported: reproducible.debian.net Holger Levsen made several small bug fixes and a few more visible changes: strip-nondeterminism Version 0.007-1 of strip-nondeterminism, the tool to post-process various file formats to normalize them, has been uploaded by Holger Levsen. Version 0.006-1 was already in the reproducible repository; the new version mainly improves the detection of Maven's pom.properties files. debbindiff development At the request of Emmanuel Bourg, Reiner Herrmann added a comparator for Java .class files. Documentation update Christoph Berg created a new page for the timestamps in manpages created by Doxygen. Package reviews 93 obsolete reviews have been removed, 76 added and 43 updated this week. New identified issues: timestamps in manpages generated by Doxygen, modification time differences in files extracted by unzip, tstamp task used in Ant build.xml, timestamps in documentation generated by ASDocGen. The description for build-id related issues has been clarified. Meetings Holger Levsen announced a first meeting on Wednesday, June 3rd, 2015, 19:00 UTC. The agenda is amendable on the wiki. Misc. Lunar worked on a proof-of-concept script to import the build environment found in .buildinfo files to UDD. Lucas Nussbaum has positively reviewed the proposed schema. Holger Levsen cleaned up various experimental toolchain repositories, marking merged branches as such.

Lunar: Reproducible builds: week 5 in Stretch cycle

What happened about the reproducible builds effort for this week: Toolchain fixes Uploads that should help other packages: Patch submitted for toolchain issues: Some discussions have been started in Debian and with upstream: Packages fixed The following 8 packages became reproducible due to changes in their build dependencies: access-modifier-checker, apache-log4j2, jenkins-xstream, libsdl-perl, maven-shared-incremental, ruby-pygments.rb, ruby-wikicloth, uimaj. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which did not make their way to the archive yet: Discussions that have been started: reproducible.debian.net Holger Levsen added two new package sets: pkg-javascript-devel and pkg-php-pear. The list of packages with and without notes is now sorted by age of the latest build. Mattia Rizzolo added support for email notifications so that maintainers can be warned when a package becomes unreproducible. Please ask Mattia or Holger or in the #debian-reproducible IRC channel if you want to be notified for your packages! strip-nondeterminism development Andrew Ayer fixed the gzip handler so that it skips adding a predetermined timestamp when there was none. Documentation update Lunar added documentation about mtimes of files extracted using unzip being timezone dependent. He also wrote a short example on how to test reproducibility. Stephen Kitt updated the documentation about timestamps in PE binaries. Documentation and scripts to perform weekly reports were published by Lunar. Package reviews 50 obsolete reviews have been removed, 51 added and 29 updated this week. Thanks to Chris West and Mathieu Bridon amongst others. New identified issues: Misc. Lunar will be talking (in French) about reproducible builds at Pas Sage en Seine on June 19th, at 15:00 in Paris. Meeting will happen this Wednesday, 19:00 UTC.

8 June 2015

Lunar: Reproducible builds: week 6 in Stretch cycle

What happened about the reproducible builds effort for this week: Presentations On May 26th, Holger Levsen presented reproducible builds in Debian at CCC Berlin for the Datengarten 52. The presentation was in German and the slides in English. Audio and video recordings are available. Toolchain fixes Niels Thykier fixed the experimental support for the automatic creation of debug packages in debhelper that is being tested as part of the reproducible toolchain. Lunar added to the reproducible build version of dpkg the normalization of permissions for files in control.tar. The patch has also been submitted based on the main branch. Daniel Kahn Gillmor proposed a patch to add support for externally supplying the build date to help2man. This sparked a discussion about agreeing on a common name for an environment variable to hold the date that should be used. It seems opinions are converging on using SOURCE_DATE_UTC, which would hold an ISO-8601 formatted date in UTC (e.g. 2015-06-05T01:08:20Z). Kudos to Daniel, Brendan O'Dea, Ximin Luo for pushing this forward. Lunar proposed a patch to Tar upstream adding a --clamp-mtime option as a generic solution for timestamp variations in tarballs, which might also be useful for dpkg. The option changes the behavior of --mtime to only use the time specified if the file mtime is newer than the given time. So far, upstream is not convinced that it would make a worthwhile addition to Tar, though. Daniel Kahn Gillmor reached out to the libburnia project to ask for help on how to make ISOs created with xorriso reproducible. We should reward Thomas Schmitt with a model upstream trophy as he went through a thorough analysis of possible sources of variations and ways to improve the situation. Most of what is missing with the current version in Debian is available in the latest upstream version, but libisoburn in Debian needs help. Daniel backported the missing option for version 1.3.2-1.1. akira submitted a new issue to Doxygen upstream regarding the timestamps added to the generated manpages. Packages fixed The following 49 packages became reproducible due to changes in their build dependencies: activemq-protobuf, bnfc, bridge-method-injector, commons-exec, console-data, djinn, github-backup, haskell-authenticate-oauth, haskell-authenticate, haskell-blaze-builder, haskell-blaze-textual, haskell-bloomfilter, haskell-brainfuck, haskell-hspec-discover, haskell-pretty-show, haskell-unlambda, haskell-x509-util, haskelldb-hdbc-odbc, haskelldb-hdbc-postgresql, haskelldb-hdbc-sqlite3, hasktags, hedgewars, hscolour, https-everywhere, java-comment-preprocessor, jffi, jgit, jnr-ffi, jnr-netdb, jsoup, lhs2tex, libcolor-calc-perl, libfile-changenotify-perl, libpdl-io-hdf5-perl, libsvn-notify-mirror-perl, localizer, maven-enforcer, pyotherside, python-xlrd, python-xstatic-angular-bootstrap, rt-extension-calendar, ruby-builder, ruby-em-hiredis, ruby-redcloth, shellcheck, sisu-plexus, tomcat-maven-plugin, v4l2loopback, vim-latexsuite. The following packages became reproducible after getting fixed: Some uploads fixed some reproducibility issues but not all of them: Patches submitted which did not make their way to the archive yet: Daniel Kahn Gillmor also started discussions for emacs24 and the unsorted lists in generated .el files, the recording of a PID number in lush, and the reproducibility of ISO images in grub2. reproducible.debian.net Notifications are now sent when the build environment for a package has changed between two builds.
This is a first step before automatically building the package once more. (Holger Levsen) jenkins.debian.net was upgraded to Debian Jessie. (Holger Levsen) A new variation is now being tested: $PATH. The second build will be done with a /i/capture/the/path added. (Holger Levsen) Holger Levsen, with the help of Alexander Couzens, wrote an extra job to test the reproducibility of coreboot. Thanks to James McCoy for helping with certificate issues. Mattia Rizzolo made some more internal improvements. strip-nondeterminism development Andrew Ayer released strip-nondeterminism/0.008-1. This new version fixes the gzip handler so that it now skips adding a predetermined timestamp when there was none. Holger Levsen sponsored the upload. Documentation update The pages about timestamps in manpages generated by Doxygen, GHC .hi files, and Jar files have been updated to reflect their status in upstream. Markus Koschany documented an easy way to prevent Doxygen from writing timestamps in HTML output. Package reviews 83 obsolete reviews have been removed, 71 added and 48 updated this week. Meetings A meeting was held on 2015-06-03. Minutes and full logs are available. It was agreed to hold such a meeting every two weeks for the time being. The time of the next meeting should be announced soon.
